Traditional image-centered methods of plant identification can be confused by varying views, uneven illumination, and growth cycles. To tolerate these significant intraclass variances, convolutional recurrent neural networks (C-RNNs) are proposed for observation-centered plant identification, mimicking human behavior. The C-RNN model is composed of two components: a convolutional neural network (CNN) backbone that serves as a feature extractor for the images, and recurrent neural network (RNN) units that synthesize the multiview features from each image into a final prediction. Extensive experiments are conducted to explore the best combination of CNN and RNN. All models are trained end-to-end on 1 to 3 plant images of the same observation by truncated backpropagation through time. The experiments demonstrate that the combination of MobileNet and the Gated Recurrent Unit (GRU) offers the best trade-off between classification accuracy and computational overhead on the Flavia dataset. On the holdout test set, the mean 10-fold accuracy with 1, 2, and 3 input leaves reaches 99.53%, 100.00%, and 100.00%, respectively. On the BJFU100 dataset, the C-RNN model achieves a classification rate of 99.65% with two-stage end-to-end training. The observation-centered method based on C-RNNs shows potential to further improve plant identification accuracy.
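The multiview fusion step described above can be sketched in a minimal, self-contained form. The sketch below assumes the CNN backbone has already produced one feature vector per image of the observation; it then runs a standard GRU recurrence over the 1 to 3 feature vectors and classifies from the final hidden state. All dimensions, weight names (`Wz`, `Uz`, etc.), and the random initialization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class GRUCell:
    """Minimal GRU cell; the standard update/reset-gate recurrence."""

    def __init__(self, in_dim, hid_dim, scale=0.1):
        # Update gate, reset gate, and candidate-state parameters.
        self.Wz = scale * rng.standard_normal((hid_dim, in_dim))
        self.Uz = scale * rng.standard_normal((hid_dim, hid_dim))
        self.bz = np.zeros(hid_dim)
        self.Wr = scale * rng.standard_normal((hid_dim, in_dim))
        self.Ur = scale * rng.standard_normal((hid_dim, hid_dim))
        self.br = np.zeros(hid_dim)
        self.Wh = scale * rng.standard_normal((hid_dim, in_dim))
        self.Uh = scale * rng.standard_normal((hid_dim, hid_dim))
        self.bh = np.zeros(hid_dim)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)          # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)          # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h) + self.bh)
        return (1.0 - z) * h + z * h_tilde


def classify_observation(features, cell, W_out, b_out):
    """Fuse 1..N per-image CNN feature vectors with a GRU, then softmax."""
    h = np.zeros(W_out.shape[1])
    for f in features:          # one recurrence step per image in the observation
        h = cell.step(f, h)
    logits = W_out @ h + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()


# Usage: 3 fake "MobileNet" feature vectors for one observation.
feat_dim, hid_dim, n_classes = 8, 16, 5
cell = GRUCell(feat_dim, hid_dim)
W_out = 0.1 * rng.standard_normal((n_classes, hid_dim))
b_out = np.zeros(n_classes)
features = [rng.standard_normal(feat_dim) for _ in range(3)]
probs = classify_observation(features, cell, W_out, b_out)
print(probs.shape)
```

Because the GRU carries state across images, the same model accepts one, two, or three views per observation, which matches the 1-to-3-leaf evaluation protocol in the abstract.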